EQ-Negotiator: Dynamic Emotional Personas Empower Small Language Models for Edge-Deployable Credit Negotiation
Long, Yunbo, Liu, Yuhan, Brintrup, Alexandra
The deployment of large language models (LLMs) in automated negotiation has set a high performance benchmark, but their computational cost and data privacy requirements render them unsuitable for many privacy-sensitive, on-device applications such as mobile assistants, embodied AI agents, or private client interactions. While small language models (SLMs) offer a practical alternative, they suffer from a significant performance gap compared to LLMs when playing emotionally charged, complex personas, especially for credit negotiation. This paper introduces EQ-Negotiator, a novel framework that bridges this capability gap using emotional personas. Its core is a reasoning system that integrates game theory with a Hidden Markov Model (HMM) to learn and track debtor emotional states online, without pre-training. This allows EQ-Negotiator to equip SLMs with the strategic intelligence to counter manipulation while de-escalating conflict and upholding ethical standards. Through extensive agent-to-agent simulations across diverse credit negotiation scenarios, including adversarial debtor strategies like cheating, threatening, and playing the victim, we show that a 7B parameter language model with EQ-Negotiator achieves better debt recovery and negotiation efficiency than baseline LLMs more than 10 times its size. This work advances persona modeling from descriptive character profiles to dynamic emotional architectures that operate within privacy constraints. Moreover, this paper establishes that strategic emotional intelligence, not raw model scale, is the critical factor for success in automated negotiation, paving the way for effective, ethical, and privacy-preserving AI negotiators that can operate on the edge.
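The abstract describes tracking debtor emotional states online with an HMM. A minimal sketch of that idea is a forward-algorithm belief update over hidden emotional states, driven by surface cues in the debtor's messages. The state names, transition matrix, and emission probabilities below are illustrative assumptions, not the paper's actual parameters:

```python
# Online emotional-state tracking with a Hidden Markov Model, in the
# spirit of EQ-Negotiator. All probabilities here are hypothetical.

STATES = ["cooperative", "distressed", "hostile"]

# P(next_state | current_state): assumed transition probabilities.
TRANS = {
    "cooperative": {"cooperative": 0.7, "distressed": 0.2, "hostile": 0.1},
    "distressed":  {"cooperative": 0.3, "distressed": 0.5, "hostile": 0.2},
    "hostile":     {"cooperative": 0.1, "distressed": 0.3, "hostile": 0.6},
}

# P(observed_cue | state): assumed emission probabilities for surface
# cues detected in the debtor's last message.
EMIT = {
    "cooperative": {"agreeable": 0.6, "sad": 0.3, "angry": 0.1},
    "distressed":  {"agreeable": 0.2, "sad": 0.6, "angry": 0.2},
    "hostile":     {"agreeable": 0.1, "sad": 0.2, "angry": 0.7},
}

def forward_update(belief, cue):
    """One step of the HMM forward algorithm: predict, then correct."""
    predicted = {
        s: sum(belief[p] * TRANS[p][s] for p in STATES) for s in STATES
    }
    unnorm = {s: predicted[s] * EMIT[s][cue] for s in STATES}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Start from a uniform belief and fold in two observed cues.
belief = {s: 1 / len(STATES) for s in STATES}
for cue in ["angry", "angry"]:
    belief = forward_update(belief, cue)
most_likely = max(belief, key=belief.get)
```

The negotiator's persona choice would then condition on `most_likely` (or the full belief), which is what lets a small model respond strategically rather than reactively.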
Intelligent Agents with Emotional Intelligence: Current Trends, Challenges, and Future Prospects
Zall, Raziyeh, Kheyrkhah, Alireza, Cambria, Erik, Naseri, Zahra, Kangavari, M. Reza
Developing intelligent agents that possess human-level intelligence is a key goal in the field of human-computer interaction (HCI) and general artificial intelligence [2]. A crucial aspect of achieving this goal is the incorporation of emotional intelligence, which is essential for human cognition and social interaction, into these intelligent agents. Emotional intelligence encompasses three interrelated capabilities: 1) emotion understanding, which involves accurately detecting and understanding affective signals, such as recognizing individuals' emotional states during interactions; 2) emotion elicitation and experiences, which refers to interpreting the causes, context, and implications of emotions for both the individual and the interaction; and 3) emotion expression, which encompasses the capacity to generate, modulate, and convey appropriate emotional responses in a socially meaningful manner. Affective Computing, coined by Rosalind Picard [1], emerged as a discipline dedicated to equipping machines with emotional intelligence, enabling them to recognize, interpret, and respond to human emotions. By embedding emotional intelligence into intelligent agents, affective computing facilitates more naturalistic, adaptive, and socially competent interactions, which in turn enhances user trust, engagement, and satisfaction [209]. Such emotionally intelligent systems not only improve usability but also enable advanced functionalities, including personalized assistance, empathetic dialogue, and context-aware decision-making. In Figure 1, an overview of the emotional intelligence capabilities in intelligent agents is presented. The process of emotional intelligence begins with analyzing the emotional aspects of the user input, enabling the agent to identify the user's affective state during interactions [259][306]. The next step is affective cognition, where the agent evaluates the observed emotional events using cognitive mental states to ensure accurate interpretation.
EmoFeedback$^2$: Reinforcement of Continuous Emotional Image Generation via LVLM-based Reward and Textual Feedback
Jia, Jingyang, Shu, Kai, Yang, Gang, Xing, Long, Chen, Xun, Liu, Aiping
Continuous emotional image generation (C-EICG) is emerging rapidly due to its ability to produce images aligned with both user descriptions and continuous emotional values. However, existing approaches lack emotional feedback from generated images, limiting the control of emotional continuity. Additionally, their simple alignment between emotions and naively generated texts fails to adaptively adjust emotional prompts according to image content, leading to insufficient emotional fidelity. To address these concerns, we propose a novel generation-understanding-feedback reinforcement paradigm (EmoFeedback$^2$) for C-EICG, which exploits the reasoning capability of the fine-tuned large vision-language model (LVLM) to provide reward and textual feedback for generating high-quality images with continuous emotions. Specifically, we introduce an emotion-aware reward feedback strategy, where the LVLM evaluates the emotional values of generated images and computes the reward against target emotions, guiding the reinforcement fine-tuning of the generative model and enhancing the emotional continuity of images. Furthermore, we design a self-promotion textual feedback framework, in which the LVLM iteratively analyzes the emotional content of generated images and adaptively produces refinement suggestions for the next-round prompt, improving the emotional fidelity with fine-grained content. Extensive experimental results demonstrate that our approach effectively generates high-quality images with the desired emotions, outperforming existing state-of-the-art methods in our custom dataset. The code and dataset will be released soon.
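The generation-understanding-feedback loop described above can be sketched with a mocked LVLM judge: the judge scores the emotion of a generated image, the reward is the negative distance to the target emotional value, and the textual feedback nudges the next prompt. The judge, the scoring scale, and the prompt-refinement rules below are stand-ins, not the paper's models:

```python
# Hedged sketch of the EmoFeedback^2 loop with a mocked LVLM judge.
# Images are plain dicts and the judge simply reads a stored score;
# in the real system a fine-tuned LVLM estimates it from pixels.

def lvlm_emotion_score(image):
    """Stand-in for the LVLM's emotion estimate on an assumed [-1, 1] scale."""
    return image["emotion"]

def reward(image, target_emotion):
    """Emotion-aware reward: closeness of judged emotion to the target."""
    return -abs(lvlm_emotion_score(image) - target_emotion)

def refine_prompt(prompt, image, target_emotion):
    """Textual feedback: adjust the next-round prompt toward the gap."""
    gap = target_emotion - lvlm_emotion_score(image)
    if gap > 0.1:    # image reads too negative relative to target
        return prompt + ", with a warmer, more positive mood"
    if gap < -0.1:   # image reads too positive relative to target
        return prompt + ", with a more subdued, somber mood"
    return prompt    # close enough: keep the prompt as-is
```

In the paper's framing, `reward` drives reinforcement fine-tuning of the generator while `refine_prompt` supplies the self-promotion textual feedback for the next round.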
EEG Emotion Recognition Through Deep Learning
Dolgopolyi, Roman, Chatzipanagiotou, Antonis
An advanced emotion classification model was developed using a CNN-Transformer architecture for emotion recognition from EEG brain wave signals, effectively distinguishing among three emotional states: positive, neutral, and negative. The model achieved a testing accuracy of 91%, outperforming traditional models such as SVM, DNN, and Logistic Regression. Training was conducted on a custom dataset created by merging data from the SEED, SEED-FRA, and SEED-GER repositories, comprising 1,455 samples with EEG recordings labeled according to emotional states. The combined dataset represents one of the largest and most culturally diverse collections available. Additionally, the model reduces the hardware requirements of the EEG apparatus by leveraging only 5 of the 62 electrodes. This reduction demonstrates the feasibility of deploying a more affordable consumer-grade EEG headset, thereby enabling accessible, at-home use while also requiring less computational power. This advancement sets the groundwork for future exploration into mood changes induced by media content consumption, an area that remains under-researched. Integration into medical, wellness, and home-health platforms could enable continuous, passive emotional monitoring, particularly beneficial in clinical or caregiving settings where traditional behavioral cues, such as facial expressions or vocal tone, are diminished, restricted, or difficult to interpret, thus potentially transforming mental health diagnostics and interventions...
Causally-Informed Reinforcement Learning for Adaptive Emotion-Aware Social Media Recommendation
Jain, Bhavika, Pitsko, Robert, Drishti, Ananya, Farooque, Mahfuza
Social media recommendation systems play a central role in shaping users' emotional experiences. However, most systems are optimized solely for engagement metrics, such as click rate, viewing time, or scrolling, without accounting for users' emotional states. Repeated exposure to emotionally charged content has been shown to negatively affect users' emotional well-being over time. We propose an Emotion-aware Social Media Recommendation (ESMR) framework that personalizes content based on users' evolving emotional trajectories. ESMR integrates a Transformer-based emotion predictor with a hybrid recommendation policy: a LightGBM model for engagement during stable periods and a reinforcement learning agent with causally informed rewards when negative emotional states persist. Through behaviorally grounded evaluation over 30-day interaction traces, ESMR demonstrates improved emotional recovery, reduced volatility, and strong engagement retention. ESMR offers a path toward emotionally aware recommendations without compromising engagement performance.
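The hybrid policy described above switches between an engagement model and an RL agent when negative emotional states persist, with a reward that credits emotional recovery. A minimal sketch of both pieces follows; the streak threshold, reward weights, and emotion scale are illustrative assumptions, not ESMR's actual design:

```python
# Sketch of ESMR-style policy switching and an emotion-aware reward.
# Emotions are assumed scored per step on [-1, 1]; negative = distress.

NEGATIVE_STREAK_THRESHOLD = 3  # consecutive negative steps before switching

def emotion_aware_reward(engagement, emotion_before, emotion_after,
                         w_engage=0.4, w_recover=0.6):
    """Reward crediting both engagement and emotional improvement."""
    recovery = emotion_after - emotion_before  # positive = mood improved
    return w_engage * engagement + w_recover * recovery

def choose_policy(emotion_trace):
    """Use the RL recovery policy after a sustained negative run,
    otherwise the engagement (LightGBM-style) policy."""
    streak = 0
    for e in reversed(emotion_trace):  # count trailing negative steps
        if e < 0:
            streak += 1
        else:
            break
    return "rl_recovery" if streak >= NEGATIVE_STREAK_THRESHOLD else "engagement"
```

The "causally informed" part of the paper's reward would replace the raw `emotion_after - emotion_before` difference with an adjusted estimate of the content's effect on mood; the plain difference here is only a placeholder.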
Your Ride, Your Rules: Psychology and Cognition Enabled Automated Driving Systems
Despite rapid advances in autonomous driving technology, current autonomous vehicles (AVs) primarily respond to external traffic conditions and treat humans as passive occupants, lacking mechanisms for active adaptation and collaboration. This limitation constrains their ability to personalize driving behavior to human expectations and hinders effective navigation of ambiguous traffic scenarios that could benefit from leveraging the occupant's advanced cognitive input, resulting in increased delays and potential safety risks. This inadequacy in the long term undermines occupant trust and hinders the widespread adoption of AV technologies. This research is motivated to propose PACE-ADS (Psychology and Cognition Enabled Automated Driving Systems): a human-centered autonomy framework that enables AVs to sense, interpret, and respond to both external traffic conditions and internal occupant states. PACE-ADS is built on an agentic workflow where three foundation model agents collaborate: the Driver Agent interprets the external environment; the Psychologist Agent decodes passive psychological signals (e.g., facial expressions) and active cognitive inputs (e.g., verbal commands); and the Coordinator Agent synthesizes these inputs to generate high-level driving behavior decisions and parameters that enhance responsiveness in ambiguous scenarios and personalize the ride. PACE-ADS is designed to complement, rather than replace, conventional AV modules. It operates at the low-frequency semantic planning layer while delegating low-level, high-frequency control to the vehicle's native systems.
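The three-agent workflow above can be sketched with plain functions standing in for the foundation-model agents; every rule, signal name, and parameter below is an illustrative assumption, not part of PACE-ADS itself:

```python
# Toy sketch of the Driver / Psychologist / Coordinator workflow.
# The real agents are foundation models; these are rule-based stand-ins.

def driver_agent(traffic):
    """Interpret the external scene into a suggested maneuver."""
    return "yield" if traffic == "ambiguous" else "proceed"

def psychologist_agent(facial_expression, verbal_command):
    """Fuse passive signals and active commands into an occupant state."""
    if verbal_command:                 # active input takes precedence
        return verbal_command
    return "anxious" if facial_expression == "tense" else "calm"

def coordinator_agent(maneuver, occupant_state):
    """Synthesize a high-level behavior decision plus a comfort parameter
    that the vehicle's native low-level controller would consume."""
    if occupant_state in ("anxious", "slow down"):
        return {"behavior": "cautious_" + maneuver, "max_accel": 1.0}
    return {"behavior": maneuver, "max_accel": 2.5}

decision = coordinator_agent(driver_agent("ambiguous"),
                             psychologist_agent("tense", None))
```

The split mirrors the paper's layering: the coordinator emits semantic-level decisions and parameters, while actual high-frequency control stays with the vehicle's existing stack.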
Detecting Emotional Dynamic Trajectories: An Evaluation Framework for Emotional Support in Language Models
Tan, Zhouxing, Xiong, Ruochong, Wan, Yulong, Ma, Jinlong, Xue, Hanlin, Deng, Qichun, Jing, Haifeng, Zhang, Zhengtong, Liu, Depei, Luo, Shiyuan, Liu, Junfei
Emotional support is a core capability in human-AI interaction, with applications including psychological counseling, role play, and companionship. However, existing evaluations of large language models (LLMs) often rely on short, static dialogues and fail to capture the dynamic and long-term nature of emotional support. To overcome this limitation, we shift from snapshot-based evaluation to trajectory-based assessment, adopting a user-centered perspective that evaluates models based on their ability to improve and stabilize user emotional states over time. Our framework constructs a large-scale benchmark consisting of 328 emotional contexts and 1,152 disturbance events, simulating realistic emotional shifts under evolving dialogue scenarios. To encourage psychologically grounded responses, we constrain model outputs using validated emotion regulation strategies such as situation selection and cognitive reappraisal. User emotional trajectories are modeled as a first-order Markov process, and we apply causally-adjusted emotion estimation to obtain unbiased emotional state tracking. Based on this framework, we introduce three trajectory-level metrics: Baseline Emotional Level (BEL), Emotional Trajectory Volatility (ETV), and Emotional Centroid Position (ECP). These metrics collectively capture user emotional dynamics over time and support comprehensive evaluation of long-term emotional support performance of LLMs. Extensive evaluations across a diverse set of LLMs reveal significant disparities in emotional support capabilities and provide actionable insights for model development.
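Two of the trajectory-level metrics named above have natural simple forms: a baseline level as the mean emotional state over the dialogue, and a volatility as the dispersion around it. The sketch below assumes a per-turn emotion score on [-1, 1] and these simple definitions, which may differ from the paper's exact formulations:

```python
# Hedged sketch of trajectory-level metrics in the spirit of BEL
# (Baseline Emotional Level) and ETV (Emotional Trajectory Volatility).
from statistics import mean, pstdev

def baseline_emotional_level(trajectory):
    """Average emotional level across the dialogue (assumed form of BEL)."""
    return mean(trajectory)

def emotional_trajectory_volatility(trajectory):
    """Dispersion of emotional states over time (assumed form of ETV)."""
    return pstdev(trajectory)

# A user who recovers steadily vs. one who swings turn to turn.
steady = [-0.6, -0.3, 0.0, 0.2, 0.4]
volatile = [-0.6, 0.5, -0.7, 0.6, -0.4]
```

Under a trajectory-based assessment, a model producing the `steady` trace scores better on both metrics than one producing `volatile`, even if single-turn snapshots of the two look similar.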
Words to Waves: Emotion-Adaptive Music Recommendation System
Chavali, Apoorva, Menezes, Reeve
Current recommendation systems often overlook emotional context and rely on historical listening patterns or static mood tags. This paper introduces a novel music recommendation framework employing a variant of the Wide and Deep Learning architecture that takes real-time emotional states inferred directly from natural language as inputs and recommends songs that closely portray the mood. The system captures emotional contexts from user-provided textual descriptions by using transformer-based embeddings, which were fine-tuned to predict the emotional dimensions of valence-arousal. The deep component of the architecture utilizes these embeddings to generalize to unseen emotional patterns, while the wide component effectively memorizes user-emotion and emotion-genre associations through cross-product features. Experimental results show that personalized music selections positively influence the user's emotions and lead to a significant improvement in emotional relevance.
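The wide component's cross-product memorization can be sketched as a lookup over emotion-genre feature keys, combined with a deep-side score from the embedding network (mocked here as a plain number). The feature names, weights, and blend ratio are illustrative, not the paper's implementation:

```python
# Sketch of the wide (memorization) side of a Wide & Deep recommender:
# cross-product features over (emotion, genre), blended with a
# hypothetical deep-component score.

def cross_feature(emotion, genre):
    """Cross-product feature key for the wide component."""
    return f"{emotion}_x_{genre}"

# Hypothetical learned wide weights: which emotion-genre pairs co-occur.
WIDE_WEIGHTS = {
    "sad_x_ambient": 0.8,
    "sad_x_metal": 0.1,
    "happy_x_pop": 0.9,
}

def score(emotion, genre, deep_score):
    """Blend the memorized wide weight with the generalizing deep score."""
    wide = WIDE_WEIGHTS.get(cross_feature(emotion, genre), 0.0)
    return 0.5 * wide + 0.5 * deep_score
```

The division of labor is the point: the wide lookup memorizes known pairings exactly, while the deep score (from valence-arousal embeddings in the paper) covers emotional descriptions never seen in training.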